In this paper, we develop four spiking neural network (SNN) models for two static American Sign Language (ASL) hand-gesture classification tasks, namely ASL letters and ASL digits. The SNN models are deployed on Intel's neuromorphic platform, Loihi, and then compared against equivalent deep neural network (DNN) models deployed on an edge computing device, the Intel Neural Compute Stick 2 (NCS2). We conduct a comprehensive comparison between the two systems in terms of accuracy, latency, power consumption, and energy. The best DNN model achieves an accuracy of 99.6% on the ASL letters dataset, whereas the best-performing SNN model reaches 99.44%. For the ASL digits dataset, the best SNN model outperforms all of its DNN counterparts with an accuracy of 99.52%. Moreover, our experimental results show that the Loihi neuromorphic hardware implementations achieve 14.67x and 4.09x reductions in power and energy, respectively, compared to NCS2.
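The paper's models are not reproduced here, but the sketch below shows the leaky integrate-and-fire (LIF) dynamics that spiking classifiers of this kind build on, in plain NumPy. The layer sizes, time constant, threshold, and rate-coded input are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def lif_layer(spikes_in, weights, tau=20.0, v_thresh=1.0, dt=1.0):
    """Simulate one leaky integrate-and-fire layer over T time steps.

    spikes_in: (T, n_in) binary spike trains
    weights:   (n_in, n_out) synaptic weights
    Returns (T, n_out) output spike trains.
    """
    T, _ = spikes_in.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)                  # membrane potentials
    spikes_out = np.zeros((T, n_out))
    decay = np.exp(-dt / tau)            # membrane leak per step
    for t in range(T):
        v = decay * v + spikes_in[t] @ weights   # integrate weighted input spikes
        fired = v >= v_thresh
        spikes_out[t] = fired
        v[fired] = 0.0                   # reset neurons that spiked
    return spikes_out

# Rate-code a toy "pixel intensity" input; classify by output spike counts.
rng = np.random.default_rng(0)
T, n_in, n_out = 100, 64, 10
pixel = rng.random(n_in)                                       # intensities in [0, 1]
spikes = (rng.random((T, n_in)) < 0.1 * pixel).astype(float)   # Poisson-like coding
w = rng.normal(0, 0.5, (n_in, n_out))
out = lif_layer(spikes, w)
print("predicted class:", out.sum(axis=0).argmax())
```

On neuromorphic hardware such as Loihi, this update is event-driven, with computation occurring only when spikes arrive, which is what underlies the power and energy savings reported above.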
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program. This technique is considered the de facto standard for computing derivatives in many machine learning and optimisation software tools. Despite its practicality, the performance of the differentiated programs, especially for functional languages and in the presence of vectors, is suboptimal. We present an AD system for a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed.
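As a concrete illustration (not this system's implementation, which is source-to-source over a functional array language), forward-mode AD can be sketched with dual numbers: each value carries a tangent alongside it, and one forward evaluation yields one column of the Jacobian. All names below are illustrative.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Dual number a + b*eps with eps^2 = 0; `tan` carries the derivative."""
    val: float
    tan: float = 0.0

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.tan + self.tan * o.val)   # product rule
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.tan)  # chain rule

def jacobian_column(f, xs, i):
    """One forward pass with tangent seed e_i gives column i of the Jacobian."""
    duals = [Dual(x, 1.0 if j == i else 0.0) for j, x in enumerate(xs)]
    return [y.tan for y in f(duals)]

# f : R^2 -> R^2, f(x, y) = (x*y, sin(x))
f = lambda v: [v[0] * v[1], sin(v[0])]
print(jacobian_column(f, [2.0, 3.0], 0))  # [3.0, cos(2.0)], the d/dx column
```

Computing a full gradient of f : R^n -> R this way costs n forward passes, which is why forward mode is usually considered inferior to reverse mode for gradients; the paper's point is that loop transformations can eliminate this overhead.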
Bayesian optimization (BayesOpt) is the gold standard for query-efficient continuous optimization. However, the discrete, high-dimensional nature of the decision variables has hindered its adoption in drug design. We develop a new approach (LaMBO) that jointly trains a denoising autoencoder with a discriminative multi-task Gaussian process head, enabling gradient-based optimization of multi-objective acquisition functions in the latent space of the autoencoder. These acquisition functions allow LaMBO to balance the explore-exploit trade-off over multiple design rounds, and to balance objective trade-offs by optimizing sequences at many different points on the Pareto frontier. We evaluate LaMBO on two small-molecule design tasks, and introduce new tasks optimizing in silico and in vitro properties. In our experiments, LaMBO outperforms genetic optimizers and does not require a large amount of pretraining, showing that BayesOpt is practical and effective for biological sequence design.
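A heavily simplified sketch of the latent-space acquisition loop is below. It substitutes a fixed toy objective and a single-objective expected-improvement criterion (scikit-learn GP, SciPy optimizer) for LaMBO's jointly trained denoising autoencoder and multi-task GP head, so every name and constant in it is an illustrative assumption rather than the paper's method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for a learned latent space and a wet-lab/simulated score;
# LaMBO instead learns the encoder jointly with a multi-task GP head.
def objective(z):
    return -np.sum((z - 0.5) ** 2)       # hypothetical fitness, peaked at z = 0.5

Z = rng.random((10, 2))                  # latent codes of already-scored sequences
y = np.array([objective(z) for z in Z])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(Z, y)

def neg_expected_improvement(z):
    mu, sigma = gp.predict(z.reshape(1, -1), return_std=True)
    mu, sigma = mu[0], max(sigma[0], 1e-9)
    gamma = (mu - y.max()) / sigma
    ei = sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    return -ei                           # minimize the negative to maximize EI

# Acquisition optimization in latent space (L-BFGS-B with numerical
# gradients), restarted from the best observed point.
res = minimize(neg_expected_improvement, Z[y.argmax()],
               bounds=[(0.0, 1.0)] * 2, method="L-BFGS-B")
print("next latent candidate to decode and score:", res.x)
```

In LaMBO the optimized latent point is decoded back to a discrete sequence and scored, and the acquisition is multi-objective, proposing candidates spread across the Pareto frontier rather than a single point.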
The purported "black box" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http://goo.gl/qKb7pL, ICML slides: bit.ly/deeplifticmlslides, ICML talk: https://vimeo.com/238275076, code: http://goo.gl/RM8jvH.